44 research outputs found
SymED: Adaptive and Online Symbolic Representation of Data on the Edge
The edge computing paradigm helps handle the Internet of Things (IoT)
generated data in proximity to its source. Challenges occur in transferring,
storing, and processing this rapidly growing amount of data on
resource-constrained edge devices. Symbolic Representation (SR) algorithms are
promising solutions to reduce the data size by converting actual raw data into
symbols. Also, they allow data analytics (e.g., anomaly detection and trend
prediction) directly on symbols, benefiting large classes of edge applications.
However, existing SR algorithms are centralized in design and work offline with
batch data, which is infeasible for real-time cases. We propose SymED
(Symbolic Edge Data representation), an online, adaptive, and
distributed approach to symbolic representation of data at the edge. SymED is
based on the Adaptive Brownian Bridge-based Aggregation (ABBA), where we assume
low-powered IoT devices do initial data compression (senders) and the more
robust edge devices do the symbolic conversion (receivers). We evaluate SymED
by measuring compression performance, reconstruction accuracy through Dynamic
Time Warping (DTW) distance, and computational latency. The results show that
SymED is able to (i) reduce the raw data with an average compression rate of
9.5%; (ii) keep a low reconstruction error of 13.25 in the DTW space; (iii)
simultaneously provide real-time adaptability for online streaming IoT data at
typical latencies of 42 ms per symbol, reducing the overall network traffic.
Comment: 14 pages, 5 figures
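The abstract measures reconstruction accuracy via Dynamic Time Warping (DTW) distance. As a point of reference only, a minimal sketch of the classic DTW distance follows; the exact DTW variant used in the paper (windowing, normalization) is not specified here, so this is an illustrative textbook implementation, not the paper's code.

```python
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) DTW with squared-difference cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = minimum cumulative cost of aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                  dp[i][j - 1],      # deletion
                                  dp[i - 1][j - 1])  # match
    return dp[n][m] ** 0.5

# Identical series have zero DTW distance.
print(dtw_distance([0, 1, 2, 3], [0, 1, 2, 3]))  # 0.0
```

A lower DTW distance between the raw series and the series reconstructed from symbols indicates a more faithful symbolic representation.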
Addressing Application Latency Requirements through Edge Scheduling
Latency-sensitive and data-intensive applications, such as IoT or mobile services, benefit from edge computing, which extends the cloud ecosystem with distributed computational resources in proximity to data providers and consumers. This brings significant benefits in terms of lower latency and higher bandwidth. However, by definition, edge computing has limited resources with respect to its cloud counterparts; thus, there exists a trade-off between proximity to users and resource utilization. Moreover, service availability is a significant concern at the edge of the network, where the extensive support systems found in cloud data centers are not usually present. To overcome these limitations, we propose a score-based edge service scheduling algorithm that evaluates the network, compute, and reliability capabilities of edge nodes. The algorithm outputs the maximum-scoring mapping between resources and services with regard to four critical aspects of service quality. Our simulation-based experiments on live video streaming services demonstrate significant improvements in both network delay and service time. Moreover, we compare edge computing with cloud computing and content delivery networks within the context of latency-sensitive and data-intensive applications. The results suggest that our edge-based scheduling algorithm is a viable solution for high service quality and responsiveness in deploying such applications.
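To make the score-based scheduling idea concrete, here is a hypothetical sketch: each edge node receives a weighted score over network, compute, and reliability capabilities, and services are greedily assigned to the highest-scoring node with remaining capacity. The attribute names, weights, and the greedy assignment are illustrative assumptions, not the paper's actual algorithm or scoring model.

```python
def node_score(node, weights=(0.4, 0.3, 0.3)):
    # Illustrative weighted sum over normalized capabilities;
    # the weights here are made up, not taken from the paper.
    w_net, w_cpu, w_rel = weights
    return (w_net * node["bandwidth"]
            + w_cpu * node["cpu"]
            + w_rel * node["reliability"])

def schedule(services, nodes):
    """Greedily map each service to the highest-scoring feasible node."""
    placement = {}
    for svc in services:
        candidates = [n for n in nodes if n["capacity"] >= svc["demand"]]
        if not candidates:
            continue  # no feasible node; service stays unscheduled
        best = max(candidates, key=node_score)
        best["capacity"] -= svc["demand"]
        placement[svc["name"]] = best["name"]
    return placement

nodes = [
    {"name": "edge-a", "bandwidth": 0.9, "cpu": 0.5,
     "reliability": 0.8, "capacity": 2},
    {"name": "edge-b", "bandwidth": 0.4, "cpu": 0.9,
     "reliability": 0.6, "capacity": 2},
]
services = [{"name": "stream-1", "demand": 1},
            {"name": "stream-2", "demand": 1}]
print(schedule(services, nodes))  # {'stream-1': 'edge-a', 'stream-2': 'edge-a'}
```

In this toy run both streaming services land on edge-a, whose combined score (0.75) beats edge-b's (0.61); a real scheduler would also weigh proximity and the paper's four service-quality aspects.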
On the Future of Cloud Engineering
Ever since the commercial offerings of the Cloud started appearing in 2006, the landscape of cloud computing has been undergoing remarkable changes with the emergence of many different types of service offerings, developer productivity enhancement tools, and new application classes, as well as the manifestation of cloud functionality closer to the user at the edge. The notion of utility computing, however, has remained constant throughout its evolution, which means that cloud users always seek to save costs of leasing cloud resources while maximizing their use. On the other hand, cloud providers try to maximize their profits while assuring service-level objectives of the cloud-hosted applications and keeping operational costs low. All these outcomes require systematic and sound cloud engineering principles. The aim of this paper is to highlight the importance of cloud engineering, survey the landscape of best practices in cloud engineering and its evolution, discuss many of the existing cloud engineering advances, and identify both the inherent technical challenges and research opportunities for the future of cloud computing in general and cloud engineering in particular.
Cloud computing: survey on energy efficiency
Cloud computing is today's most emphasized Information and Communications Technology (ICT) paradigm, directly or indirectly used by almost every online user. However, such great significance comes with the support of a great infrastructure that includes large data centers comprising thousands of server units and other supporting equipment. Their share in power consumption generates between 1.1% and 1.5% of the total electricity use worldwide and is projected to rise even more. Such alarming numbers demand rethinking the energy efficiency of such infrastructures. However, before making any changes to infrastructure, an analysis of the current status is required. In this article, we perform a comprehensive analysis of an infrastructure supporting the cloud computing paradigm with regard to energy efficiency. First, we define a systematic approach for analyzing the energy efficiency of the most important data center domains, including server and network equipment, as well as cloud management systems and appliances consisting of software utilized by end users. Second, we utilize this approach to analyze the available scientific and industrial literature on state-of-the-art practices in data centers and their equipment. Finally, we extract existing challenges and highlight future research directions.